Patent abstract:
The present invention relates to a computing device (700) for detecting an object of interest in an image, comprising a memory device (710) and a processor (715) for processing image data by: evaluating a pixel contrast; identifying neighboring pixels above a predetermined contrast value, which represent an object of potential interest; filtering the pixels on the basis of color information to identify areas where there is a color edge transition; applying a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations; applying a threshold filter to identify pixel regions having values greater than a predetermined threshold value and grouping them into one or more groups representing an object of interest; and generating an alert to notify the operator of the detection.
Publication number: FR3042630A1
Application number: FR1658153
Filing date: 2016-09-01
Publication date: 2017-04-21
Inventors: Aaron Y Mosher; Charles Robert Kreienheder
Applicant: Boeing Co
Primary IPC class:
Patent description:

SYSTEMS AND METHODS FOR DETECTION OF OBJECTS
PRIOR ART
The field of the invention generally relates to the visual analysis of objects and, more particularly, the detection of objects in an image using a computer-based methodology.
At least some existing systems for large-area searches require a user to inspect a scene, either in person or by monitoring a video screen. However, the user's attention may be distracted, and it is tiring to watch a screen for long periods of time without losing focus. As a result, automated systems can be used to identify objects of interest automatically. However, at least some known automated systems are computationally intensive, requiring high-performance computing devices.
ABSTRACT
In one aspect, a computing device for detecting an object of interest in an image is provided. The computing device includes a memory device and a processor communicatively coupled to the memory device and having encoded instructions that, when executed, are configured to process image data by: evaluating a pixel contrast to facilitate detection of an edge transition in an image, identifying neighboring pixels that are above a predetermined contrast value, which represent an object of potential interest; filtering the pixels based on color information to identify areas of high contrast where there is a color edge transition; applying a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations; applying a threshold filter to identify the pixel regions having values that are greater than a predetermined threshold value and grouping the identified pixel regions into one or more groups representing an object of interest; and generating an alert to notify an operator that the object of interest in the image has been detected.
In another aspect, a method for identifying an object of interest in an image is provided. The method includes: evaluating, using a computing device, a pixel contrast to facilitate detection of an edge transition in an image, identifying neighboring pixels that are above a predetermined contrast value, which represent an object of potential interest; filtering, using the computing device, the pixels based on color information to identify areas of high contrast where there is a color edge transition; applying, using the computing device, a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations; applying, using the computing device, a threshold filter to identify pixel regions having values that are greater than a predetermined threshold value and grouping the identified pixel regions into one or more groups representing an object of interest; and generating, using the computing device, an alert to notify an operator that the object of interest in the image has been detected.
In yet another aspect, a system for detecting an object of interest in an image is provided. The system includes a camera configured to acquire an image and a computing device communicatively coupled to the camera and configured to: evaluate a pixel contrast to facilitate detection of an edge transition in an image, identifying neighboring pixels that are above a predetermined contrast value, which represent an object of potential interest; filter the pixels based on color information to identify high-contrast areas where there is a color edge transition; apply a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations; apply a threshold filter to identify pixel regions having values that are greater than a predetermined threshold value and group the identified pixel regions into one or more groups representing an object of interest; and generate an alert to notify an operator that the object of interest in the image has been detected.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is an example of an image.
Figure 2 is a flowchart of an exemplary method for detecting an object of interest in the image shown in Figure 1.
Figure 3 is an example of an image that can be processed using the method shown in Figure 2.
Figure 4 shows the image of Figure 3 after the application of a Laplace transform.
Figure 5 shows the image of Figure 3 after a color filtering process.
Figure 6 shows the image of Figure 3 after application of the large-mask filter convolution.
Fig. 7 is a block diagram of an example of a computing device that can be used to execute the method shown in Fig. 2.
Figure 8 is an example of an image.
Figure 9 illustrates a horizon identified from the image shown in Figure 8.
DETAILED DESCRIPTION
The implementations described below facilitate the detection of an object of interest in an image. To detect the object, image processing includes evaluating a pixel contrast to facilitate detection of an edge transition, filtering pixels based on color information, applying a convolution filter mask, and applying a threshold filter to identify pixel regions to be grouped into one or more groups representing an object of interest. When one or more objects of interest are detected, an appropriate alert is generated to notify a user of the detection. In addition, the location of each object of interest may be indicated within the image.
Figure 1 is an exemplary image 100. Image 100 may be an image of a video or may be an independent static image (e.g., a photograph). Image 100 includes an object of interest 102 visible on a surface of a body of water 104 (for example, an ocean or the like). In the exemplary implementation, the object of interest 102 is a boat. As will be appreciated by those skilled in the art, the object of interest 102 may be any type of suitable object in a suitable environment. For example, objects of interest 102 may be people, buildings, animals, devices, etc., and environments could include ground, water, sky, etc. For the sake of clarity, the object of interest 102 is identified by a box in the image 100.
Figure 2 is a flowchart of an exemplary method 200 for detecting an object, such as object of interest 102. Method 200 is performed by a computer-based image processing apparatus, as described in detail below. Method 200 facilitates the detection of one or more objects of interest in an image, such as image 100.
In block 202, an image is received from a camera or other optical device. As noted above, the image could be an independent still image or an individual image from a video. In block 204, a Gaussian filter is applied to the image to facilitate the removal of high frequency artifacts in the image.
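The smoothing step of block 204 can be sketched in a few lines. The following is a minimal NumPy illustration, not the patent's implementation; the kernel size and sigma are arbitrary assumptions, and a real system would likely use an optimized library routine.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def gaussian_blur(image, size=5, sigma=1.0):
    """Convolve a 2-D grayscale image with the kernel (edge padding),
    suppressing high-frequency artifacts before edge detection."""
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out
```

Because the kernel is normalized, smoothing a uniform image leaves it unchanged; on real frames it attenuates pixel-level noise that would otherwise trigger the contrast test in block 206.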
In block 206, a Laplace transform is applied to the image. The Laplace transform is applied for edge detection purposes. Specifically, neighboring pixels that are above a predetermined contrast value are identified because they may represent an object of potential interest. The Laplace transform also preserves color information in the image.
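In discrete image processing this step is commonly realized by convolving the image with a small Laplacian kernel. A hedged NumPy sketch under that assumption (the kernel choice, threshold, and function names are illustrative, not taken from the patent):

```python
import numpy as np

# 3x3 Laplacian kernel: responds strongly where a pixel differs
# from its four neighbors, i.e. at edge transitions.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_response(channel):
    """Edge response of one image channel (edge padding)."""
    padded = np.pad(channel.astype(float), 1, mode="edge")
    h, w = channel.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * LAPLACIAN)
    return out

def edge_pixels(channel, contrast_threshold):
    """Pixels whose absolute response exceeds the predetermined contrast value."""
    return np.abs(laplacian_response(channel)) > contrast_threshold
```

Applying the operator per color channel rather than on a grayscale copy is one way to preserve the color information that the later filtering steps rely on.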
In block 208, a color space conversion is applied to the image. Subsequently, in block 210, the image is filtered based on color information. Specifically, in the exemplary implementation, high-contrast red, green, and blue areas are filtered to identify color edge transitions.
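The patent does not specify the color space or the filter used in blocks 208 and 210. As a rough stand-in, per-channel gradients can flag locations where any of the red, green, or blue channels changes abruptly, i.e., a color edge transition; the function and parameter names below are assumptions for illustration.

```python
import numpy as np

def color_edge_mask(rgb, contrast_threshold):
    """Flag pixels where any color channel changes sharply between
    neighbors, a simplified stand-in for the patent's color-space
    conversion followed by color filtering."""
    rgb = rgb.astype(float)
    # Horizontal and vertical first differences, per channel.
    dx = np.abs(np.diff(rgb, axis=1, prepend=rgb[:, :1]))
    dy = np.abs(np.diff(rgb, axis=0, prepend=rgb[:1]))
    grad = (dx + dy).max(axis=2)  # strongest response over R, G, B
    return grad > contrast_threshold
```

A user-selectable variant (as described later for lifeboat detection) would examine only the red channel instead of taking the maximum over all three.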
In block 212, a large-mask filter convolution is applied to the image. This eliminates background noise from the image and generates localized concentrations of contrast and color changes. Finally, after the image processing steps of blocks 204, 206, 208, 210 and 212 have been performed, a thresholding operation is performed on the image to isolate an object of interest in block 214. In the exemplary implementation, the thresholding operation uses a threshold filter to identify the pixel regions with values greater than a predetermined threshold value and groups the identified pixel regions into one or more groups representing at least one object of interest. The thresholding operation can be adjusted to establish a sensitivity for the detection of objects of interest.
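The large-mask convolution and the threshold-and-group operation of blocks 212 and 214 can be approximated by a box filter followed by connected-component grouping. This sketch assumes a simple averaging mask and 4-connectivity; the patent specifies neither.

```python
import numpy as np
from collections import deque

def box_convolve(response, size=7):
    """Large-mask convolution: average the edge/color response over a
    size x size neighborhood, so isolated noise is diluted while dense
    clusters of response survive."""
    pad = size // 2
    padded = np.pad(response.astype(float), pad, mode="constant")
    h, w = response.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

def group_regions(mask):
    """Group thresholded pixels into 4-connected components, each a
    candidate object of interest. Returns a list of pixel-index lists."""
    seen = np.zeros_like(mask, dtype=bool)
    groups = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                queue, region = deque([(i, j)]), []
                seen[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                groups.append(region)
    return groups
```

Raising or lowering the threshold applied to the convolved response is the sensitivity adjustment the text describes: a higher threshold keeps only dense response clusters.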
In block 216, in the exemplary implementation, a contour frame (or other visual index) is drawn around the isolated object in the image, so that the object is easily located within the image by a user observing the image (for example, on a display device). Alternatively, the contour frame may not be drawn (for example, based on a user preference). For successive images, the flow returns to block 202.
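Drawing the contour frame of block 216 reduces to computing the bounding box of a pixel group and painting its border. A minimal sketch; the in-place, single-channel drawing and the frame value are assumptions for illustration.

```python
import numpy as np

def bounding_box(region):
    """Axis-aligned bounding box of a pixel group: (top, left, bottom, right)."""
    ys = [p[0] for p in region]
    xs = [p[1] for p in region]
    return min(ys), min(xs), max(ys), max(xs)

def draw_frame(image, box, value=255):
    """Draw a one-pixel contour frame around the detected object, in place."""
    top, left, bottom, right = box
    image[top, left:right + 1] = value
    image[bottom, left:right + 1] = value
    image[top:bottom + 1, left] = value
    image[top:bottom + 1, right] = value
    return image
```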
In some implementations, the method 200 includes different and / or additional steps. For example, to facilitate the detection of lifeboats and life jackets in the water, the method 200 may include a user-selectable mode that filters the image for any red colored objects. In addition, as described below, a "line splicing" step can be used to automatically detect a horizon on the image.
Figure 3 is an example of an image 300 that can be processed using the method 200. As shown in Figure 3, the image 300 includes an object of interest 302. For the sake of clarity, an outline frame 304 is already shown drawn in image 300. However, one skilled in the art will appreciate that, according to method 200, the outline frame 304 is not drawn on image 300 until object 302 is detected in image 300.
Figure 4 shows the image 300 after the application of the Laplace transform of block 206, Figure 5 shows the image 300 after the color filtering of block 210, and Figure 6 shows the image 300 after the application of the large-mask convolution of block 212. In the exemplary implementation, for color filtering (shown in Figure 5), a user-selectable "color channel" command may be used to detect specific colors or all colors combined. For example, color filtering can be set to detect red colors to facilitate the detection of lifeboats and life jackets. In addition, other settings can be used to detect other marine objects (for example, markers and navigation buoys are usually green or red).
In the exemplary implementation, the method 200 is performed using a "system on a chip" GPU processor. This processor includes a general-purpose graphics processing unit (GPGPU) and a central processing unit (CPU) in the same package. Accordingly, different steps of the method 200 may be performed by the GPGPU and/or the CPU. For example, in the exemplary implementation, blocks 202 and 216 are executed by the CPU and blocks 204, 206, 208, 210, 212 and 214 are executed by the GPGPU. In particular, GPGPU operations can be executed simultaneously with CPU operations. Thus, while a contour frame is drawn in a previous image (i.e., block 216), the next image is already being processed by the GPGPU. This allows the method 200 to be executed with high computing efficiency.
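The described overlap, where the CPU annotates frame N while the GPGPU processes frame N+1, is a classic two-stage producer/consumer pipeline. A thread-and-queue sketch of the scheduling idea follows; the stage functions are placeholders, not the patent's kernels, and a real system would dispatch stage one to the GPU.

```python
import queue
import threading

def pipeline(frames, process, annotate, depth=2):
    """Two-stage pipeline: a worker thread plays the 'GPGPU' role,
    processing frame N+1 while the main ('CPU') thread annotates
    frame N. `depth` bounds how far the worker may run ahead."""
    q = queue.Queue(maxsize=depth)

    def worker():
        for frame in frames:
            q.put(process(frame))  # stage 1: heavy per-frame processing
        q.put(None)                # sentinel: no more frames

    threading.Thread(target=worker, daemon=True).start()
    results = []
    while True:
        item = q.get()
        if item is None:
            break
        results.append(annotate(item))  # stage 2: draw overlay / alert
    return results
```

The bounded queue keeps memory use constant while still letting the two stages run concurrently, which is the source of the efficiency the text describes.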
Figure 7 is a block diagram of the computing device 700 that can be used to execute the method 200 (shown in Figure 2). For example, the computing device 700 may be implemented using a GPGPU and/or a CPU of a "system on a chip" as described above. The computing device 700 includes at least one memory device 710 and a processor 715 that is coupled to the memory device 710 to execute instructions. In some implementations, executable instructions are stored in the memory device 710. In some implementations, the computing device 700 performs one or more of the operations described herein by programming the processor 715. For example, the processor 715 may be programmed by encoding an operation as one or more executable instructions and providing the executable instructions in the memory device 710.
The processor 715 may include one or more processing units (e.g., in a multi-core configuration). In addition, the processor 715 can be implemented using one or more heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. In another illustrative example, the processor 715 may be a symmetric multiprocessor system containing multiple processors of the same type. In addition, the processor 715 can be implemented using any suitable programmable circuit including one or more microcontrollers and systems, microprocessors, reduced instruction set computing (RISC) circuits, application-specific integrated circuits (ASICs), programmable logic circuits, field-programmable gate arrays (FPGAs), and any other circuit capable of performing the functions described herein.
In the exemplary implementation, the memory device 710 is one or more devices for storing and retrieving information such as executable instructions and/or other data. The memory device 710 may include one or more computer-readable media such as, without limitation, a dynamic random access memory (DRAM), a static random access memory (SRAM), a solid state disk, and/or a hard disk. The memory device 710 may be configured to store, without limitation, application source code, application object code, portions of source code of interest, portions of object code of interest, configuration data, runtime events and/or any other type of data.
In the exemplary implementation, the computing device 700 includes a presentation interface 720 that is coupled to the processor 715. The presentation interface 720 presents information to a user 725. For example, the presentation interface 720 may include a display adapter (not shown) that can be coupled to a display device, such as a CRT, a liquid crystal display (LCD), an organic LED (OLED) display, and/or an "electronic ink" display. In some implementations, the presentation interface 720 includes one or more display devices.
In the exemplary implementation, the computing device 700 comprises a user input interface 735. The user input interface 735 is coupled to the processor 715 and receives inputs from the user 725. The user input interface 735 may include, for example, a keyboard, a pointing device, a mouse, a stylus, a touch panel (for example, a touchpad or a touch screen), a gyroscope, an accelerometer, a position detector and/or an audio user input interface. A single component, such as a touch screen, can function as both a display device of the presentation interface 720 and the user input interface 735.
The computing device 700, in the exemplary implementation, comprises a communication interface 740 coupled to the processor 715. The communication interface 740 communicates with one or more remote devices. To communicate with remote devices, the communication interface 740 may include, for example, a wired network adapter, a wireless network adapter, and / or a mobile communication adapter.
To facilitate the execution of the method 200, in some implementations, the computing device 700 performs a horizon detection step and ignores data above the horizon during the image processing steps of the method 200. Figure 8 is an example of an image 800 comprising two objects of interest 802 on a body of water 804. In particular, a large portion of the image 800 is the sky 806 above the body of water 804. When attempting to identify the objects of interest 802, the sky 806 can be ignored. Accordingly, as shown in Figure 9, the computing device 700 identifies a horizon 810 between the body of water 804 and the sky 806. Subsequently, in the steps of the method 200, the computing device ignores data above the identified horizon 810, further increasing the computational efficiency of the method 200.
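The patent does not describe the horizon detection algorithm itself. One common approach, sketched here purely as an assumption, locates the strongest vertical brightness transition in each column and fits a straight line through those candidates; pixels above the fitted line can then be masked out of later processing.

```python
import numpy as np

def detect_horizon(gray):
    """Estimate a horizon as a straight line fitted through the row of
    strongest vertical brightness transition in each column."""
    grad = np.abs(np.diff(gray.astype(float), axis=0))
    rows = grad.argmax(axis=0)           # candidate horizon row per column
    cols = np.arange(gray.shape[1])
    slope, intercept = np.polyfit(cols, rows, 1)
    return slope, intercept

def below_horizon_mask(shape, slope, intercept):
    """Boolean mask that is True only below the fitted horizon line,
    so later processing can skip the sky entirely."""
    h, w = shape
    rr = np.arange(h)[:, None]
    cc = np.arange(w)[None, :]
    return rr > (slope * cc + intercept)
```

Inverting the comparison yields the complementary mode mentioned below, in which only detections at or above the horizon are kept.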
In addition, horizon detection can help this algorithm or other image processing algorithms by allowing the user to specify objects to search for only above or below the horizon. Automatic horizon detection also facilitates the removal of unwanted detections that may otherwise occur at the horizon due to the visual boundary between water and sky, and allows filtering to include only detections at the horizon (for example, distant ships instead of nearby vessels). Horizon detection can also help in driverless/automated vehicle navigation (for example, horizon position is an important reference during flight).
In some implementations, the computing device 700 also performs tracking functionality. Specifically, after the detection of an object of interest in an image, the computing device 700 tracks the position of the detected object of interest across successive images. In addition, in some implementations, the computing device 700 uses or communicates with a classification system to classify (i.e., identify) the detected object of interest, so that the object of interest can be tracked continuously even after it disappears from view and reappears in successive images.
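A minimal form of the tracking described above is greedy nearest-centroid association between frames. The distance threshold and data layout below are illustrative assumptions, and the patent's classifier-assisted re-identification after an object leaves view is not modeled.

```python
import math

def track(prev_objects, detections, max_dist=20.0):
    """Associate each previously tracked object (id -> (x, y) centroid)
    with the nearest new detection; unmatched detections start new
    tracks. Returns the updated id -> centroid mapping."""
    tracks = {}
    unused = list(detections)
    for obj_id, (px, py) in prev_objects.items():
        if not unused:
            break
        best = min(unused, key=lambda d: math.hypot(d[0] - px, d[1] - py))
        if math.hypot(best[0] - px, best[1] - py) <= max_dist:
            tracks[obj_id] = best      # same object, updated position
            unused.remove(best)
    next_id = max(prev_objects, default=-1) + 1
    for det in unused:                 # detections with no prior track
        tracks[next_id] = det
        next_id += 1
    return tracks
```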
As described above, the computing device 700 can be implemented using a "system on a chip" processor. In addition, the method 200 is computationally efficient. This allows the computing device 700 to have relatively low size, weight and power (SWaP) requirements. Accordingly, in some implementations, the computing device 700 and/or portions of the computing device 700 may be located on board a vehicle (for example, an unmanned aerial vehicle (UAV) or other driverless vehicle). In such implementations, the captured images can be acquired using a camera or other optical receiver on the vehicle.
In addition, in some implementations, the image may include data outside the visible spectrum.
In some implementations, the computing device 700 generates one or more real-world outputs for a user. For example, when detecting an object, the computing device 700 may inform a user using appropriate audio-visual techniques. In addition, in some implementations, a user may search for a particular object (for example, a boat). With the aid of a user input device, the user can instruct the computing device 700 to generate an audible and / or visual alert when the particular object is detected.
The systems and methods described herein facilitate the detection of an object of interest in an image. To detect the object, the image processing includes evaluating a pixel contrast to facilitate the detection of an edge transition, filtering pixels based on color information, applying a convolution filter mask, and applying a threshold filter to identify pixel regions to be grouped into one or more groups representing an object of interest. When an object of interest is detected, an appropriate alert is generated to notify a user of the detection. When the object of interest is detected in the image, the computing device may generate an audible and/or visual alert to warn a user, together with an outline frame or other visual cue superimposed on the display of the image around the isolated object, so that the object is easily locatable in the image by a user (for example, on a display device).
In addition, the description includes embodiments according to the following clauses:
Clause 1. A computing device for detecting an object of interest in an image, comprising: a memory device; and a processor communicatively coupled to said memory device and having encoded instructions which, when executed, are configured to process image data by: evaluating a pixel contrast to facilitate detection of an edge transition in an image, identifying neighboring pixels that are above a predetermined contrast value, which represent an object of potential interest; filtering the pixels based on color information to identify areas of high contrast where there is a color edge transition; applying a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations; applying a threshold filter to identify the pixel regions having values that are greater than a predetermined threshold value and grouping the identified pixel regions into one or more groups representing an object of interest; and generating an alert to notify an operator that the object of interest in the image has been detected.
Clause 2. Computer device according to clause 1, wherein said processor is configured to detect the object of interest on a surface of a body of water.
Clause 3. Computer device according to clause 1, wherein said processor is configured to output a visual index superimposed on a display of the image to notify the operator that the object of interest has been detected in the image.
Clause 4. A computing device according to clause 3, wherein said processor is configured to generate an output including an audible alert to notify the operator that the object of interest has been detected in the image.
Clause 5. A computer device according to clause 1, further comprising a display device communicatively coupled to said processor and configured to display the image and the alert.
Clause 6. A computing device according to clause 5, wherein said processor is configured to generate an outline frame that is superimposed on the image display to notify the operator of the location of the object of interest in the image.
Clause 7. A computing device according to clause 1, wherein said processor comprises a general purpose graphics processing unit (GPGPU) and a central processing unit (CPU) which function as coprocessors for processing image data.
Clause 8. A computing device according to clause 1, wherein the threshold filter is configured to be adjusted to establish a sensitivity for the detection of objects of interest.
Clause 9. A method for identifying an object of interest in an image, the method comprising: evaluating, using a computing device, a pixel contrast to facilitate detection of an edge transition in an image, by identifying neighboring pixels that are above a predetermined contrast value, which represent an object of potential interest; filtering, using the computing device, pixels based on color information to identify areas of high contrast where there is a color edge transition; applying, using the computing device, a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations; applying, using the computing device, a threshold filter for identifying pixel regions having values that are greater than a predetermined threshold value and grouping the identified pixel regions into one or more groups representing an object of interest; and generating, using the computing device, an alert to notify an operator that the object of interest in the image has been detected.
Clause 10. A method according to clause 9, further comprising generating a visual index superimposed on a display of the image to notify the operator that the object of interest has been detected in the image.
Clause 11. A method according to clause 10, wherein generating an alert comprises generating an audible alert to notify the operator that the object of interest in the image has been detected, wherein the image represents a surface of a body of water.
Clause 12. A method according to clause 9, further comprising generating a contour frame which is superimposed on a display of the image to notify the operator of the location of the object of interest in the image.
Clause 13. A method according to clause 9, further comprising adjusting the threshold filter to establish a sensitivity for the detection of objects of interest.
Clause 14. A method according to clause 9, wherein evaluating, using a computing device, a pixel contrast comprises evaluating, using a computing device that includes a general-purpose graphics processing unit (GPGPU) and a central processing unit (CPU) that function as coprocessors.
Clause 15. The method of clause 9, further comprising: identifying a horizon in the image; and ignoring pixel data above the horizon or below the horizon.
Clause 16. A system for detecting an object of interest in an image, the system comprising: a camera configured to acquire an image; and a computing device communicatively coupled to said camera and configured to: evaluate a pixel contrast to facilitate detection of an edge transition in an image, identifying neighboring pixels that are above a predetermined contrast value, which represent an object of potential interest; filter the pixels based on color information to identify areas of high contrast where there is a color edge transition; apply a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations; apply a threshold filter to identify pixel regions having values that are greater than a predetermined threshold value and group the identified pixel regions into one or more groups representing an object of interest; and generate an alert to notify an operator that the object of interest in the image has been detected.
Clause 17. A system according to clause 16, wherein said camera and said computing device are installed on a driverless vehicle.
Clause 18. A system according to clause 16, wherein said computing device is configured to detect the object of interest on a surface of a body of water.
Clause 19. A system according to clause 16, wherein said computing device is configured to output a visual index superimposed on a display of the image to notify the operator that the object of interest has been detected in the image.
Clause 20. A system according to clause 16, wherein said computing device is configured to generate an output containing an audible alert to notify the operator that the object of interest has been detected in the image.
This written description uses examples to present various implementations, including the best mode, to enable those skilled in the art to realize these implementations, including the manufacture and use of any devices or systems and the execution of any incorporated processes. The patentable scope is defined by the claims and may include other examples that occur to those skilled in the art. Such other examples are considered to be within the scope of the claims if they have structural elements that do not differ from the literal formulation of the claims or if they comprise equivalent structural elements with insubstantial differences from the literal formulation of the claims.
Claims:
Claims (20)
[1" id="c-fr-0001]
A computing device (700) for detecting an object of interest in an image, comprising: a memory device (710); and a processor (715) communicatively coupled to said memory device (710) and having encoded instructions which, when executed, are configured to process image data by: evaluating a pixel contrast for facilitating the detection of an edge transition in an image (206), by identifying neighboring pixels that are above a predetermined contrast value, which represent an object of potential interest (102); filtering the pixels based on color information to identify high contrast areas where there is a color edge transition (210); applying a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations (212); applying a threshold filter to identify the pixel regions having values that are greater than a predetermined threshold value and grouping the identified pixel regions into one or more groups representing an object of interest (214); and generating an alert to notify an operator that the object of interest in the image has been detected (216).
[2" id="c-fr-0002]
The computer device (700) of claim 1, wherein said processor (715) is configured to detect the object of interest (102) on a surface of a body of water (104).
[3" id="c-fr-0003]
The computer device (700) of claim 1 or claim 2, wherein said processor (715) is configured to output a visual index superimposed on a display of the image to notify the operator that the object of interest (102) has been detected in the image (100).
[4" id="c-fr-0004]
The computer device (700) according to claim 3, wherein said processor (715) is configured to generate an output including an audible alert to notify the operator that the object of interest (102) has been detected in the image (100).
[5" id="c-fr-0005]
The computer device (700) of any preceding claim, further comprising a display device (740) communicatively coupled to said processor (715) and configured to display the image and the alert.
[6" id="c-fr-0006]
The computer device (700) according to claim 5, wherein said processor (715) is configured to generate a contour frame (304) which is superimposed on the display of the image (300) to notify the operator of the location of the object of interest (302) in the image (300).
[7" id="c-fr-0007]
The computer device (700) according to any one of the preceding claims, wherein said processor (715) comprises a general purpose graphics processing unit (GPGPU) and a central processing unit (CPU) which function as coprocessors to process image data.
[8" id="c-fr-0008]
The computer device (700) of any preceding claim, wherein the threshold filter is configured to be adjusted to establish a sensitivity for the detection of objects of interest (214).
[9" id="c-fr-0009]
A method (200) for identifying an object of interest (102) in an image (100), the method (200) comprising: evaluating, using a computing device (700), a pixel contrast for facilitating the detection of an edge transition in an image, by identifying neighboring pixels that are above a predetermined contrast value, which represent an object of potential interest (206); filtering, using the computing device (700), pixels based on color information to identify high contrast areas where there is a color edge transition (210); applying, using the computing device (700), a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations (212); applying, using the computing device (700), a threshold filter to identify pixel regions having values that are greater than a predetermined threshold value and grouping the identified pixel regions into one or more groups representing an object of interest (214); and generating, using the computing device (700), an alert to notify an operator that the object of interest in the image has been detected (216).
[10" id="c-fr-0010]
The method (200) of claim 9, further comprising generating a visual index superimposed on a display of the image to notify the operator that the object of interest has been detected in the image.
[11" id="c-fr-0011]
The method (200) of claim 10, wherein generating an alert comprises generating an audible alert to notify the operator that the object of interest (102) in the image has been detected, wherein the image represents a surface of a body of water (104).
[12" id="c-fr-0012]
The method (200) of any one of claims 9 to 11, further comprising generating a contour frame (304) which is superimposed on a display of the image (100) to notify the operator of the location of the object of interest (102) in the image (100).
[13" id="c-fr-0013]
The method (200) of any one of claims 9 to 12, further comprising adjusting the threshold filter to establish a sensitivity for the detection of objects of interest (102).
[14" id="c-fr-0014]
The method (200) of any one of claims 9 to 13, wherein evaluating, using a computing device (700), a pixel contrast comprises evaluating, using a computing device (700) that includes a general-purpose graphics processing unit (GPGPU) and a central processing unit (CPU) which function as coprocessors.
[15]
The method (200) of any one of claims 9 to 14, further comprising: identifying a horizon in the image (100); and ignoring pixel data above the horizon or below the horizon.
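Claim 15's horizon step can likewise be sketched; the heuristic below (taking the row of strongest mean vertical gradient as the horizon, then blanking the rows above it) is an assumption for illustration, not the method the patent prescribes:

```python
import numpy as np

def mask_above_horizon(gray):
    """Sketch of claim 15's idea (the horizon heuristic is an assumption):
    take the row with the strongest mean vertical gradient as the horizon,
    then blank every row above it so later stages ignore those pixels."""
    gy = np.abs(np.diff(gray.astype(np.float64), axis=0))
    horizon_row = int(np.argmax(gy.mean(axis=1))) + 1
    masked = gray.copy()
    masked[:horizon_row] = 0  # ignore pixel data above the horizon
    return horizon_row, masked

# Synthetic frame: bright "sky" above row 12, dark "sea" below.
frame = np.full((24, 24), 200, dtype=np.uint8)
frame[12:] = 30
row, masked = mask_above_horizon(frame)
```

The same function could blank the rows below the horizon instead, matching the claim's alternative of ignoring pixel data below it.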
[16]
A system for detecting an object of interest (102) in an image (100), the system comprising: a camera configured to acquire an image (202); and a computing device (700) communicatively coupled to said camera and configured to: evaluate pixel contrast to facilitate detection of an edge transition in an image, identifying neighboring pixels above a predetermined contrast value that represent an object of potential interest (206); filter the pixels based on color information to identify high-contrast areas where there is a color edge transition (210); apply a convolution filter mask to produce pixel regions having a localized concentration of contrast and color variations (212); apply a threshold filter to identify pixel regions having values greater than a predetermined threshold value and group the identified pixel regions into one or more groups representing an object of interest (214); and generate an alert to notify an operator that the object of interest in the image has been detected (216).
[17]
The system of claim 16, wherein said camera and said computing device (700) are installed on a driverless vehicle.
[18]
The system of claim 16 or claim 17, wherein said computing device (700) is configured to detect the object of interest (102) on a surface of a body of water (104).
[19]
The system of any one of claims 16 to 18, wherein said computing device (700) is configured to output a visual index superimposed on a display of the image (100) to notify the operator that the object of interest (102) has been detected in the image (100).
[20]
The system of any one of claims 16 to 19, wherein said computing device (700) is configured to generate an output including an audible alert to notify the operator that the object of interest (102) has been detected in the image (100).
Family patents:
Publication number | Publication date
US10049434B2|2018-08-14|
FR3042630B1|2019-08-30|
US20170109891A1|2017-04-20|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

US5046119A|1990-03-16|1991-09-03|Apple Computer, Inc.|Method and apparatus for compressing and decompressing color video data with an anti-aliasing mode|
US6160902A|1997-10-10|2000-12-12|Case Corporation|Method for monitoring nitrogen status using a multi-spectral imaging system|
US20010016053A1|1997-10-10|2001-08-23|Monte A. Dickson|Multi-spectral imaging sensor|
US6539126B1|1998-04-17|2003-03-25|Equinox Corporation|Visualization of local contrast for n-dimensional image data|
US20040008381A1|1999-07-16|2004-01-15|Jacob Steve A.|System and method for converting color data to gray data|
US6567537B1|2000-01-13|2003-05-20|Virginia Commonwealth University|Method to assess plant stress using two narrow red spectral bands|
US7359553B1|2001-02-16|2008-04-15|Bio-Key International, Inc.|Image identification system|
US6922485B2|2001-12-06|2005-07-26|Nec Corporation|Method of image segmentation for object-based image retrieval|
US7430303B2|2002-03-29|2008-09-30|Lockheed Martin Corporation|Target detection method and system|
JP4319857B2|2003-05-19|2009-08-26|株式会社日立製作所|How to create a map|
US7805005B2|2005-08-02|2010-09-28|The United States Of America As Represented By The Secretary Of The Army|Efficient imagery exploitation employing wavelet-based feature indices|
IL172480A|2005-12-08|2011-11-30|Amir Zahavi|Method for automatic detection and classification of objects and patterns in low resolution environments|
US20130163822A1|2006-04-04|2013-06-27|Cyclops Technologies, Inc.|Airborne Image Capture and Recognition System|
US20080144013A1|2006-12-01|2008-06-19|Institute For Technology Development|System and method for co-registered hyperspectral imaging|
US20090271719A1|2007-04-27|2009-10-29|Lpa Systems, Inc.|System and method for analysis and display of geo-referenced imagery|
US20090022359A1|2007-07-19|2009-01-22|University Of Delaware|Vegetation index image generation methods and systems|
US8260048B2|2007-11-14|2012-09-04|Exelis Inc.|Segmentation-based image processing system|
WO2010144877A1|2009-06-11|2010-12-16|Petroalgae, Llc|Vegetation indices for measuring multilayer microcrop density and growth|
US8170272B1|2010-02-23|2012-05-01|The United States Of America As Represented By The Secretary Of The Navy|Method for classifying vessels using features extracted from overhead imagery|
WO2012010873A1|2010-07-21|2012-01-26|Mbda Uk Limited|Image processing method|
US8731305B1|2011-03-09|2014-05-20|Google Inc.|Updating map data using satellite imagery|
US20120330716A1|2011-06-27|2012-12-27|Cadio, Inc.|Triggering collection of consumer data from external data sources based on location data|
US20130039574A1|2011-08-09|2013-02-14|James P. McKay|System and method for segmenting water, land and coastline from remote imagery|
US20140226024A1|2013-02-08|2014-08-14|Kutta Technologies, Inc.|Camera control in presence of latency|
US9305233B2|2013-09-12|2016-04-05|The Boeing Company|Isotropic feature matching|
US8958602B1|2013-09-27|2015-02-17|The United States Of America As Represented By The Secretary Of The Navy|System for tracking maritime domain targets from full motion video|
US20170206644A1|2016-01-15|2017-07-20|Abl Ip Holding Llc|Light fixture fingerprint detection for position estimation|
US20170206658A1|2016-01-15|2017-07-20|Abl Ip Holding Llc|Image detection of mapped features and identification of uniquely identifiable objects for position estimation|
US10178293B2|2016-06-22|2019-01-08|International Business Machines Corporation|Controlling a camera using a voice command and image recognition|
CL2016003302A1|2016-12-22|2017-09-15|Univ Chile|Radiovision device|
WO2019132548A1|2017-12-28|2019-07-04|Samsung Electronics Co., Ltd.|Method and apparatus for processing image and computer program product thereof|
EP3654233A1|2018-11-15|2020-05-20|BSB Artificial Intelligence GmbH|System and method for identifying an object in water|
CN110033468B|2019-03-21|2020-01-17|孙凯|Object removal detection method and device and terminal equipment|
US11176193B2|2019-10-24|2021-11-16|Adobe Inc.|Search input generation for image search|
Legal status:
2017-09-25| PLFP| Fee payment|Year of fee payment: 2 |
2018-09-25| PLFP| Fee payment|Year of fee payment: 3 |
2018-12-07| PLSC| Publication of the preliminary search report|Effective date: 20181207 |
2019-09-25| PLFP| Fee payment|Year of fee payment: 4 |
2020-09-25| PLFP| Fee payment|Year of fee payment: 5 |
2021-09-27| PLFP| Fee payment|Year of fee payment: 6 |
Priority:
Application number | Filing date | Patent title
US14884066|2015-10-15|
US14/884,066|US10049434B2|2015-10-15|2015-10-15|Systems and methods for object detection|